Player-compatible learning and player-compatible equilibrium

Authors

Abstract

Player-Compatible Equilibrium (PCE) imposes cross-player restrictions on the magnitudes of players' "trembles" onto different strategies. These restrictions capture the idea that trembles correspond to deliberate experiments by agents who are unsure of the prevailing distribution of play. PCE selects intuitive equilibria in a number of examples where trembling-hand perfect equilibrium (Selten, 1975) and proper equilibrium (Myerson, 1978) have no bite. We show that rational learning and weighted fictitious play imply our compatibility restrictions in a steady-state setting.
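The abstract refers to weighted fictitious play as one of the learning rules that generate the compatibility restrictions. As a rough illustration of that learning rule only (not of the paper's construction or its cross-player conditions), the sketch below runs discounted fictitious play in a two-player matrix game: each player best-responds to discounted empirical frequencies of the opponent's past actions, so recent observations carry more weight. The payoff matrices, function name, and parameters are all hypothetical choices for the example.

```python
import numpy as np

def weighted_fictitious_play(payoffs_a, payoffs_b, rounds=200, discount=0.9):
    """Minimal sketch of weighted (discounted) fictitious play.

    payoffs_a[i, j]: row player's payoff when row plays i, column plays j.
    payoffs_b[i, j]: column player's payoff in the same profile.
    Each round, both players best-respond to discounted empirical
    frequencies of the opponent's past play, then the observation
    weights are discounted and updated.
    """
    n_a, n_b = payoffs_a.shape
    counts_a = np.ones(n_a)  # row player's past actions, as tracked by column
    counts_b = np.ones(n_b)  # column player's past actions, as tracked by row
    for _ in range(rounds):
        belief_b = counts_b / counts_b.sum()  # row's belief about column
        belief_a = counts_a / counts_a.sum()  # column's belief about row
        act_a = int(np.argmax(payoffs_a @ belief_b))  # row best response
        act_b = int(np.argmax(belief_a @ payoffs_b))  # column best response
        counts_a *= discount
        counts_a[act_a] += 1.0
        counts_b *= discount
        counts_b[act_b] += 1.0
    return counts_a / counts_a.sum(), counts_b / counts_b.sum()

# Run on matching pennies, a zero-sum game with a unique mixed equilibrium.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
freq_a, freq_b = weighted_fictitious_play(A, -A)
```

With a discount below 1, play keeps cycling rather than settling, which is one reason steady-state analyses (as in the abstract) look at the long-run distribution of play rather than a single convergent path.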


Related resources

Player-Compatible Equilibrium

We define Player-Compatible Equilibrium or “PCE,” which imposes cross-player restrictions on magnitudes of the players’ “trembles” onto different actions. These restrictions are inspired by the idea that trembles correspond to deliberate experiments by inexperienced agents who are unsure of the prevailing distribution of strategies in opponent populations. We show that PCE selects the “intuitiv...


Incentive Compatible Two Player Cake Cutting

We characterize methods of dividing a cake between two bidders in a way that is incentive-compatible and Pareto-efficient. In our cake cutting model, each bidder desires a subset of the cake (with a uniform value over this subset), and is allocated some subset. Our characterization proceeds via reducing to a simple one-dimensional version of the problem, and yields, for example, a tight bound o...


Two-player incentive compatible mechanisms are affine maximizers

In mechanism design, for a given type space, there may be incentive compatible mechanisms which are not affine maximizers. We prove that for two-player games on a discrete type space, any given mechanism can be turned into an affine maximizer through a nontrivial perturbation of the type space. Furthermore, our theorem is the strongest possible in this setup. Our proof relies on new results on ...


Compatible Reward Inverse Reinforcement Learning

Problem: the Inverse Reinforcement Learning (IRL) problem is to recover a reward function explaining a set of expert demonstrations. Advantage of IRL over Behavioral Cloning (BC): transferability of the reward. Issues with some IRL methods: how to build the features for the reward function; how to select a reward function among all the optimal ones; what if there is no access to the environment? ...


Active Learning for Player Modeling

Learning models of player behavior has been the focus of several studies. This work is motivated by better understanding of player behavior, a knowledge that can ultimately be employed to provide player-adapted or personalized content. In this paper, we propose the use of active learning for player experience modeling. We use a dataset from hundreds of players playing Infinite Mario Bros. as a ...



Journal

Journal title: Journal of Economic Theory

Year: 2021

ISSN: 1095-7235, 0022-0531

DOI: https://doi.org/10.1016/j.jet.2021.105238